    Language-based sensing descriptors for robot object grounding

    In this work, we consider an autonomous robot that is required to understand commands given by a human through natural language. Specifically, we assume that this robot is provided with an internal representation of the environment. However, such a representation is unknown to the user. In this context, we address the problem of allowing a human to understand the robot's internal representation through dialog. To this end, we introduce the concept of sensing descriptors. Such representations are used by the robot to recognize unknown object properties in the given commands and to warn the user about them. Additionally, we show how these properties can be learned over time by leveraging past interactions, in order to enhance the grounding capabilities of the robot.
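    The abstract leaves the mechanism implicit; the sketch below shows one way to read it, with every name and data structure invented for the example: known properties map to sensing functions, unknown ones trigger a warning, and learning registers a new descriptor.

```python
# Hypothetical sketch of sensing descriptors, not the authors' implementation:
# a descriptor maps a property name to a function that can actually sense it.

KNOWN_DESCRIPTORS = {
    "color": lambda obj: obj.get("color"),
    "size": lambda obj: obj.get("size"),
}

def ground_command(properties, obj):
    """Ground the properties mentioned in a command; warn about unknown ones."""
    grounded, unknown = {}, []
    for prop in properties:
        if prop in KNOWN_DESCRIPTORS:
            grounded[prop] = KNOWN_DESCRIPTORS[prop](obj)
        else:
            unknown.append(prop)
    if unknown:
        print(f"Warning: I cannot sense {', '.join(unknown)} yet.")
    return grounded

def learn_descriptor(name, sensing_fn):
    """Learning over time: register a descriptor acquired from past interactions."""
    KNOWN_DESCRIPTORS[name] = sensing_fn

mug = {"color": "red", "size": "small", "weight": "light"}
ground_command(["color", "weight"], mug)               # warns about 'weight'
learn_descriptor("weight", lambda obj: obj.get("weight"))
print(ground_command(["color", "weight"], mug))        # now fully grounded
```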

    Teaching robots parametrized executable plans through spoken interaction

    While operating in domestic environments, robots will necessarily face difficulties not envisioned by their developers at programming time. Moreover, the tasks to be performed by a robot will often have to be specialized and/or adapted to the needs of specific users and specific environments. Hence, learning how to operate by interacting with the user seems to be a key enabling feature to support the introduction of robots in everyday environments. In this paper we contribute a novel approach for learning, through the interaction with the user, task descriptions that are defined as a combination of primitive actions. The proposed approach makes a significant step forward by making task descriptions parametric with respect to domain-specific semantic categories. Moreover, by mapping the task representation into a task representation language, we are able to express complex execution paradigms and to revise the learned tasks in a high-level fashion. The approach is evaluated in multiple practical applications with a service robot.
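    To make "parametric with respect to domain-specific semantic categories" concrete, here is a minimal sketch, assuming a task library keyed by task name and category signature (all identifiers below are hypothetical): the same learned plan works for any Drink and any Person.

```python
# Hypothetical illustration of a parametric task description: a sequence of
# primitive actions whose arguments are typed by semantic categories rather
# than bound to fixed instances.

from dataclasses import dataclass

@dataclass
class PrimitiveAction:
    name: str
    params: tuple  # parameter names, bound only at execution time

TASK_LIBRARY = {
    ("bring", ("Drink", "Person")): [
        PrimitiveAction("goto", ("drink_location",)),
        PrimitiveAction("grasp", ("drink",)),
        PrimitiveAction("goto", ("person_location",)),
        PrimitiveAction("handover", ("drink", "person")),
    ]
}

def instantiate(task_name, categories, bindings):
    """Specialize a parametric task with concrete argument bindings."""
    plan = TASK_LIBRARY[(task_name, categories)]
    return [(a.name, tuple(bindings.get(p, p) for p in a.params)) for a in plan]

print(instantiate("bring", ("Drink", "Person"),
                  {"drink": "coke", "drink_location": "kitchen",
                   "person": "Alice", "person_location": "sofa"}))
```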

    Graph-based task libraries for robots: generalization and autocompletion

    In this paper, we consider an autonomous robot that persists over time performing tasks and the problem of providing one additional task to the robot's task library. We present an approach to generalize tasks, represented as parameterized graphs with sequences, conditionals, and looping constructs of sensing and actuation primitives. Our approach performs graph-structure task generalization, while maintaining task executability and parameter value distributions. We present an algorithm that, given the initial steps of a new task, proposes an autocompletion based on a recognized past similar task. Our generalization and autocompletion contributions are effective on different real robots. We show concrete examples of the robot primitives and task graphs, as well as results, with Baxter. In experiments with multiple tasks, we show a significant reduction in the number of new task steps to be provided.
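    As an illustration of the autocompletion idea, the sketch below uses a naive prefix-overlap match standing in for the paper's similarity-based task recognition; the library contents are invented for the example.

```python
# Toy autocompletion over a task library: given the first steps of a new
# task, find the stored task whose prefix overlaps most and propose the rest.

TASK_LIBRARY = {
    "fetch_and_deliver": ["goto", "detect", "grasp", "goto", "release"],
    "inspect_room": ["goto", "detect", "report"],
}

def autocomplete(initial_steps):
    """Propose a completion from the best-matching stored task prefix."""
    best_task, best_overlap = None, 0
    for name, steps in TASK_LIBRARY.items():
        overlap = 0
        for a, b in zip(initial_steps, steps):
            if a != b:
                break
            overlap += 1
        if overlap > best_overlap:
            best_task, best_overlap = name, overlap
    if best_task is None:
        return None
    return best_task, TASK_LIBRARY[best_task][best_overlap:]

print(autocomplete(["goto", "detect", "grasp"]))
# -> ('fetch_and_deliver', ['goto', 'release'])
```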

    Interactive semantic mapping: Experimental evaluation

    Robots that are launched in the consumer market need to provide more effective human-robot interaction, and, in particular, spoken language interfaces. However, in order to support the execution of high-level commands as they are specified in natural language, a semantic map is required. Such a map is a representation that enables the robot to ground the commands into the actual places and objects located in the environment. In this paper, we present the experimental evaluation of a system specifically designed to build semantically rich maps, through the interaction with the user. The results of the experiments not only provide the basis for a discussion of the features of the proposed approach, but also highlight the manifold issues that arise in the evaluation of semantic mapping.
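    A toy illustration, assuming a dictionary-based map rather than the system's actual representation, of how a semantic map grounds a spoken symbol into a metric goal:

```python
# Hypothetical semantic layer on top of a metric map: symbols from a spoken
# command are grounded into poses the robot can navigate to.

semantic_map = {
    "kitchen":     {"type": "room",   "pose": (3.2, 1.5)},
    "fridge":      {"type": "object", "pose": (3.6, 1.1), "in": "kitchen"},
    "living_room": {"type": "room",   "pose": (7.0, 4.2)},
}

def ground(symbol):
    """Ground a word from a command into a metric pose, if known."""
    entry = semantic_map.get(symbol)
    return entry["pose"] if entry else None

print(ground("fridge"))  # (3.6, 1.1): "go to the fridge" becomes a metric goal
```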

    Knowledge Representation for Robots through Human-Robot Interaction

    The representation of the knowledge needed by a robot to perform complex tasks is restricted by the limitations of perception. One possible way of overcoming this situation and designing "knowledgeable" robots is to rely on the interaction with the user. We propose a multi-modal interaction framework that allows the robot to effectively acquire knowledge about the environment where it operates. In particular, in this paper we present a rich representation framework that can be automatically built from the metric map annotated with the indications provided by the user. Such a representation then allows the robot to ground complex referential expressions for motion commands and to devise topological navigation plans to reach the target locations.
    Comment: Knowledge Representation and Reasoning in Robotics Workshop at ICLP 201
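    As a rough illustration of the topological-planning part, the sketch below assumes a graph of user-annotated places and a plain BFS planner; both are stand-ins, not the paper's actual representation or algorithm.

```python
# Hypothetical topological layer built from user annotations on a metric map;
# a navigation plan is a shortest path over annotated places.

from collections import deque

TOPOLOGY = {
    "entrance": ["corridor"],
    "corridor": ["entrance", "kitchen", "office"],
    "kitchen": ["corridor"],
    "office": ["corridor"],
}

def topological_plan(start, goal):
    """Shortest sequence of annotated places from start to goal (BFS)."""
    frontier, parents = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nxt in TOPOLOGY[node]:
            if nxt not in parents:
                parents[nxt] = node
                frontier.append(nxt)
    return None

print(topological_plan("entrance", "kitchen"))
# -> ['entrance', 'corridor', 'kitchen']
```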

    Multi-robot task acquisition through sparse coordination

    In this paper, we consider several autonomous robots with separate tasks that require coordination, but not a coupling at every decision step. We assume that each robot separately acquires its task, possibly from different providers. We address the problem of multiple robots incrementally acquiring tasks that require their sparse coordination. To this end, we present an approach to provide tasks to multiple robots, represented as sequences, conditionals, and loops of sensing and actuation primitives. Our approach leverages principles from sparse coordination to acquire and represent these joint-robot plans compactly. Specifically, each primitive has associated preconditions and effects, and robots can condition on the state of one another. Robots share their state externally using a common domain language. The complete sparse-coordination framework runs on several robots. We report on experiments carried out with a Baxter manipulator and a CoBot mobile service robot.
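    A minimal sketch of the sparse-coordination idea, assuming a shared key-value state as the "common domain language" (every name below is hypothetical): each robot conditions only on the shared facts it needs, not on the other robot's full plan.

```python
# Illustrative only: primitives with preconditions and effects over a shared
# state, so two robots couple at a single fact rather than at every step.

shared_state = {"object_on_table": True, "object_delivered": False}

def baxter_pick_and_place():
    # Baxter acts independently; its only coupling is the effect it publishes.
    if shared_state["object_on_table"]:
        shared_state["object_on_table"] = False
        shared_state["object_at_pickup_point"] = True

def cobot_deliver():
    # CoBot conditions on one shared fact, not on Baxter's whole plan.
    if shared_state.get("object_at_pickup_point"):
        shared_state["object_delivered"] = True

baxter_pick_and_place()
cobot_deliver()
print(shared_state)  # object_delivered is now True
```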

    Approaching Qualitative Spatial Reasoning About Distances and Directions in Robotics

    One of the long-term goals of our society is to build robots able to live side by side with humans. In order to do so, robots need to be able to reason in a qualitative way. To this end, over recent years, the Artificial Intelligence research community has developed a considerable number of qualitative reasoners. The majority of such approaches, however, have been developed under the assumption that suitable representations of the world were available. In this paper, we propose a method for performing qualitative spatial reasoning in robotics on abstract representations of environments, automatically extracted from metric maps. Both the representation and the reasoner are used to perform the grounding of commands vocally given by the user. The approach has been verified on a real robot interacting with several non-expert users.
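    A small sketch of extracting qualitative relations from metric data; the thresholds and label set below are illustrative, and the paper's actual calculus may differ.

```python
# Toy metric-to-qualitative conversion: classify a target relative to the
# robot as near/far and front/behind/left/right.

import math

def qualitative_relation(robot_xy, robot_heading, target_xy):
    """Map a metric relation to (distance label, direction label)."""
    dx, dy = target_xy[0] - robot_xy[0], target_xy[1] - robot_xy[1]
    dist = math.hypot(dx, dy)
    # Bearing relative to the robot's heading, normalized to (-pi, pi].
    bearing = math.atan2(dy, dx) - robot_heading
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    distance = "near" if dist < 2.0 else "far"
    if abs(bearing) < math.pi / 4:
        direction = "front"
    elif abs(bearing) > 3 * math.pi / 4:
        direction = "behind"
    else:
        direction = "left" if bearing > 0 else "right"
    return distance, direction

print(qualitative_relation((0, 0), 0.0, (1.0, 1.0)))  # ('near', 'left')
```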

    Multi-robot search for a moving target: Integrating world modeling, task assignment and context

    In this paper, we address coordination within a team of cooperative autonomous robots that need to accomplish a common goal. Our survey of the vast literature on the subject highlights two directions to further improve the performance of a multi-robot team. In particular, in a dynamic environment, coordination needs to be adapted to the different situations at hand (for example, when there is a dramatic loss of performance due to an unreliable communication network). To this end, we contribute a novel approach for coordinating robots. Such an approach allows a robotic team to exploit environmental knowledge to adapt to the various circumstances encountered, enhancing its overall performance. This result is achieved by dynamically adapting the underlying task assignment and distributed world representation, based on the current state of the environment. We demonstrate the effectiveness of our coordination system by applying it to the problem of locating a moving, non-adversarial target. In particular, we report on experiments carried out with a team of humanoid robots in a soccer scenario and a team of mobile bases in an office environment.
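    An illustrative reading of context-dependent coordination, with contexts and policies invented for the example: the task-assignment strategy is swapped when the network degrades.

```python
# Hypothetical context-based switching of the task-assignment strategy.

def assign_greedy(robots, target):
    # Nominal case: the closest robot chases, the others spread out.
    return {r: ("chase" if r == min(robots, key=robots.get) else "spread")
            for r in robots}

def assign_static_zones(robots, target):
    # Degraded network: fall back to fixed search zones, no negotiation.
    return {r: f"search_zone_{i}" for i, r in enumerate(sorted(robots))}

def coordinate(robots, target, network_reliable):
    context = "nominal" if network_reliable else "degraded_comms"
    policy = assign_greedy if context == "nominal" else assign_static_zones
    return context, policy(robots, target)

distances = {"robot_a": 1.2, "robot_b": 4.5}  # distance to estimated target
print(coordinate(distances, target=None, network_reliable=False))
```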

    Disambiguating localization symmetry through a Multi-Clustered Particle Filtering

    Distributed particle-filter-based algorithms have proven to be effective tools for modeling non-linear and dynamic processes in Multi-Robot Systems. In complex scenarios, where mobile agents are involved, it is crucial to disseminate reliable beliefs among agents to avoid the degradation of the global estimates. We present a cluster-based data association method to boost the performance of a Distributed Particle Filter. Exploiting this data association, we propose a disambiguation method for the RoboCup scenario that is robust to noise and false perceptions. The results obtained using both a simulated and a real environment demonstrate the effectiveness of the proposed approach.
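    A toy sketch of the cluster-based idea, using a simple 1-D grid association in place of whatever method the paper employs: sharing per-cluster summaries instead of raw particles keeps a spurious mode from corrupting the global estimate.

```python
# Toy cluster-based data association for weighted particles (x, w): group
# particles into spatial cells and summarize each cell as a compact belief.

from collections import defaultdict

def cluster_particles(particles, cell=1.0):
    """Group weighted particles into cells; return (weight, mean) per cluster."""
    clusters = defaultdict(lambda: [0.0, 0.0])  # cell -> [weight_sum, w*x sum]
    for x, w in particles:
        key = round(x / cell)
        clusters[key][0] += w
        clusters[key][1] += w * x
    return [(w, wx / w) for w, wx in clusters.values() if w > 0]

particles = [(2.1, 0.3), (2.3, 0.4), (7.9, 0.05), (2.0, 0.25)]
for weight, mean in sorted(cluster_particles(particles), reverse=True):
    print(f"cluster weight={weight:.2f} mean={mean:.2f}")
# The dominant cluster (weight 0.95, near x=2.2) is the belief worth sharing;
# the weak one near x=7.9 is likely a false perception.
```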

    Context-based coordination for a multi-robot soccer team

    The key issue investigated in the field of Multi-Robot Systems (MRS) is the problem of coordinating multiple robots in a common environment. In tackling this issue, problems concerning the capabilities of multiple heterogeneous robots and their environmental constraints need to be faced. In this paper, we introduce a novel approach for coordinating a team of robots. The key contribution of the proposed method consists in exploiting the rules governing the scenario by identifying and using "contexts". The robots' actions and perceptions are specialized to the current context, to enhance both single and collective behaviors. The presented approach has been extensively validated in a RoboCup scenario. In particular, we adopt a soccer environment as a testing ground for our algorithm. We evaluate our method in several testing sessions on a simulator representing a virtual model of a soccer field. The obtained results show a substantial improvement for the team adopting our algorithm.
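    A toy illustration, with rules and roles invented for the example, of how contexts can specialize a soccer team's behavior:

```python
# Hypothetical context detection and role specialization for a soccer team.

def detect_context(ball_in_own_half, losing):
    if ball_in_own_half and losing:
        return "defensive_pressure"
    if not ball_in_own_half:
        return "attack"
    return "build_up"

ROLES_BY_CONTEXT = {
    "attack":             ["striker", "supporter", "midfielder", "goalie"],
    "build_up":           ["midfielder", "supporter", "defender", "goalie"],
    "defensive_pressure": ["defender", "defender", "supporter", "goalie"],
}

def assign_roles(robots, ball_in_own_half, losing):
    context = detect_context(ball_in_own_half, losing)
    return context, dict(zip(robots, ROLES_BY_CONTEXT[context]))

print(assign_roles(["r1", "r2", "r3", "r4"],
                   ball_in_own_half=True, losing=True))
# -> ('defensive_pressure', {'r1': 'defender', ...})
```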